Search for: All records

Creators/Authors contains: "Chen, Hanning"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Free, publicly-accessible full text available March 1, 2026
  2. With the increasing integration of machine learning into IoT devices, managing energy consumption and data transmission has become a critical challenge. Many IoT applications depend on complex computations performed on server-side infrastructure, so efficient methods are needed to reduce unnecessary data transmission. One promising solution is to deploy compact machine learning models near sensors, so that only relevant data frames are identified and transmitted. However, existing near-sensor models lack adaptability: they require extensive pre-training and are often rigidly configured before deployment. This paper proposes a novel framework that fuses online learning, active learning, and knowledge distillation to enable adaptive, resource-efficient near-sensor intelligence. Our approach allows near-sensor models to dynamically fine-tune their parameters after deployment using online learning, eliminating the need for extensive pre-labeling and training. Through a sequential training and execution process, the framework achieves continuous adaptability without prior knowledge of the deployment environment. To improve performance while preserving model efficiency, we integrate knowledge distillation, transferring critical insights from a larger teacher model to a compact student model. Additionally, active learning reduces the amount of training data required while maintaining competitive performance. We validated our framework both on benchmark data from the MS COCO dataset and in a simulated IoT environment. The results demonstrate significant improvements in energy efficiency and data-transmission optimization, highlighting the practical applicability of our method in real-world IoT scenarios. (A minimal code sketch of the distillation and active-learning steps appears after this list.)
    Free, publicly-accessible full text available January 22, 2026
  3. Free, publicly-accessible full text available February 26, 2026
  4. Free, publicly-accessible full text available January 1, 2026
  5. Free, publicly-accessible full text available February 26, 2026
  6. Link prediction is a crucial task in network analysis, but it has been shown to be prone to biased predictions, particularly when links are unfairly predicted between nodes from different sensitive groups. In this paper, we study the fair link prediction problem, which aims to ensure that the predicted link probability is independent of the sensitive attributes of the connected nodes. Existing methods typically incorporate debiasing techniques within graph embeddings to mitigate this issue. However, training on large real-world graphs is already challenging, and adding fairness constraints further complicates the process. To overcome this challenge, we propose FairLink, a method that learns a fairness-enhanced graph to bypass the need for debiasing during the link predictor's training. FairLink maintains link prediction accuracy by ensuring that the enhanced graph follows a training trajectory similar to that of the original input graph. Meanwhile, it enhances fairness by minimizing the absolute difference in link probabilities between node pairs within the same sensitive group and node pairs from different sensitive groups. Our extensive experiments on multiple large-scale graphs demonstrate that FairLink not only promotes fairness but also often achieves link prediction accuracy comparable to baseline methods. Most importantly, the enhanced graph exhibits strong generalizability across different GNN architectures. FairLink is highly scalable, making it suitable for deployment in real-world large-scale graphs, where maintaining both fairness and accuracy is critical. (A sketch of the fairness term appears after this list.)
  7. Processing In-Memory (PIM) is a data-centric computation paradigm that performs computations inside the memory, eliminating the memory-wall problem of traditional von Neumann architectures. The associative processor, a type of PIM architecture, performs parallel and energy-efficient operations on vectors. This architecture is useful in vector-based applications such as Hyperdimensional Computing (HDC) Reinforcement Learning (RL). HDC is emerging as a powerful, lightweight alternative to costly traditional RL models such as Deep Q-Learning. The HDC implementation of Q-Learning encodes states in a high-dimensional representation in which calculating the Q-values and finding the maximum one can be done entirely in parallel. In this article, we propose to implement the main operations of an HDC RL framework on the associative processor. This acceleration achieves up to 152.3× energy savings and 6.4× time savings compared to an FPGA implementation. Moreover, HDRLPIM shows that an SRAM-based AP implementation promises up to 968.2× energy-delay product gains compared to the FPGA implementation. (A sketch of the parallel Q-value computation appears after this list.)
    Free, publicly-accessible full text available October 31, 2025
  8. Free, publicly-accessible full text available January 1, 2026
  9. Free, publicly-accessible full text available November 1, 2025
  10. Abstract: Increasing part complexity and precision requirements necessitate computer numerical control (CNC) manufacturing, in which programmed instructions remove material from a workpiece through operations such as milling, turning, and drilling. The process involves many parameters (e.g., tools, spindle speed, feed rate, cut depth), making it highly complex, and interacting phenomena among the workpiece, tools, and environmental conditions add further complexity that can lead to defects and poor product quality. An efficient automated system rests on two capabilities: monitoring and swift quality assessment. Within these, three aspects determine the quality of a CNC operation: 1) tool wear, the inherent deterioration of machine components caused by prolonged use; 2) chatter, vibration that occurs during machining; and 3) surface finish, the final product's surface roughness. Much research focuses on only one of these areas while neglecting the interconnected influences of all three; a holistic assessment should instead consider overall product quality, the ultimate measure of a process. Integrating CNC systems with in-situ monitoring devices such as acoustic sensors, high-speed cameras, and thermal cameras aims to capture the underlying physical aspects of machining (tool wear, chatter, and surface roughness) and has enabled artificial intelligence and machine learning (ML) in smart CNC systems, with the goals of increasing productivity, minimizing downtime, and ensuring product quality. However, although ML methods have yielded noteworthy results on in-situ CNC process data, their black-box nature and predominant focus on single-task objectives pose challenges: real-world manufacturing often requires addressing multiple interconnected tasks simultaneously, and models trained for a single objective are of limited use in such multi-faceted environments. Addressing these challenges, we introduce MTaskHD, a novel multi-task framework that leverages hyperdimensional computing (HDC) to fuse data from multiple channels and process signals while characterizing quality across the tasks of a manufacturing operation. It also yields interpretable outcomes, allowing users to understand the process behind predictions. In a real-world experiment on a hybrid 5-axis Deckel-Maho-Gildemeister CNC machine, MTaskHD was implemented to forecast the quality of three distinct features: the left 25.4 mm counterbore diameter, the right 25.4 mm counterbore diameter, and the 2.54 mm milled radius. The model predicted the quality levels of all three features in its multi-task configuration with an F1-score of 95.3%, outperforming alternative machine learning approaches including support vector machines, naïve Bayes, multi-layer perceptrons, convolutional neural networks, and time-LeNet. The inherent multi-task capability, robustness, and interpretability of HDC together offer a way to understand intricate manufacturing dynamics and operations. (A sketch of the HDC encoding pattern appears after this list.)
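The distillation-plus-active-learning loop summarized in item 2 can be sketched compactly. Below is a minimal sketch, assuming standard Hinton-style softened-logit distillation and entropy-based sample selection; the function names, hyperparameters, and PyTorch framing are our assumptions, not the paper's implementation.

```python
# A minimal sketch, assuming Hinton-style knowledge distillation and
# entropy-based active learning; not the paper's actual implementation.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend softened teacher targets with hard-label cross-entropy."""
    # KL term: transfer the teacher's class-similarity structure to the student.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature**2
    # CE term: keep the student anchored to the (few) ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

def select_for_labeling(student_logits, k):
    """Active-learning step: pick the k most uncertain frames by entropy."""
    probs = F.softmax(student_logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy.topk(k).indices
```

Scaling the KL term by temperature² is the conventional way to keep its gradient magnitude comparable to the cross-entropy term as the temperature changes.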
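Item 6 describes FairLink's fairness signal as the absolute difference between link probabilities for intra-group and inter-group node pairs, which reduces to a one-term penalty. The sketch below illustrates that term under our own assumptions (a batch of candidate edges with known endpoint attributes and at least one pair of each type); it is not the FairLink code.

```python
# A minimal sketch of the stated fairness term: the absolute gap between the
# mean predicted link probability for same-group and different-group pairs.
# Variable names are illustrative assumptions, not FairLink's implementation.
import torch

def group_gap_loss(link_probs, sens_u, sens_v):
    """link_probs: (B,) edge probabilities; sens_u/sens_v: (B,) endpoint groups."""
    same = sens_u == sens_v                 # intra-group pair mask
    intra = link_probs[same].mean()         # mean probability, same sensitive group
    inter = link_probs[~same].mean()        # mean probability, different groups
    return (intra - inter).abs()            # drive the gap toward zero
```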
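Item 7's central idea, encoding states as hypervectors so that all Q-values and their maximum fall out of parallel vector operations, can be illustrated in a few lines. This NumPy sketch rests on several assumptions (a small discrete state space, random bipolar item memory, a delta-rule update); the paper's contribution is mapping such operations onto an associative processor, not a CPU.

```python
# An illustrative NumPy sketch of hyperdimensional Q-learning: one Q-value per
# action is an inner product, so all Q-values (and their argmax) come from a
# single parallel matrix-vector product. Sizes and update rule are assumptions.
import numpy as np

D, N_STATES, N_ACTIONS = 4096, 16, 4
rng = np.random.default_rng(0)

# Item memory: a fixed random bipolar hypervector per discrete state.
state_hv = rng.choice([-1, 1], size=(N_STATES, D)).astype(np.float32)
# One trainable model hypervector per action; Q(s, a) = <state_hv[s], model[a]> / D.
model = np.zeros((N_ACTIONS, D), dtype=np.float32)

def q_values(s):
    return model @ state_hv[s] / D          # all actions at once, fully parallel

def update(s, a, r, s_next, lr=0.1, gamma=0.9):
    td = r + gamma * q_values(s_next).max() - q_values(s)[a]
    model[a] += lr * td * state_hv[s]       # bundle the scaled state into the model
```

Because q_values is a single matrix-vector product, both the per-action Q-values and their maximum map naturally onto the row-parallel compare and write primitives of an associative processor.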
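Finally, the channel fusion and per-task prototype classification attributed to MTaskHD in item 10 follows a common HDC pattern: encode each channel, bind it to a channel key, bundle the results, and classify by nearest prototype. The sketch below shows that pattern under our own assumptions (random-projection encoders, bipolar keys, cosine similarity); it is not the authors' code, and a real multi-task setup would keep one prototype table per feature being assessed.

```python
# An illustrative sketch (not the authors' MTaskHD code) of HDC multi-channel
# fusion and prototype-based classification; all sizes are assumptions.
import numpy as np

D, N_FEAT, N_CHANNELS = 8192, 64, 3
rng = np.random.default_rng(1)
proj = rng.standard_normal((N_CHANNELS, N_FEAT, D))   # random channel encoders
keys = rng.choice([-1, 1], size=(N_CHANNELS, D))      # channel-binding keys

def encode(sample):                                   # sample: (N_CHANNELS, N_FEAT)
    hv = np.zeros(D)
    for c in range(N_CHANNELS):
        ch = np.sign(sample[c] @ proj[c])             # nonlinear channel encoding
        hv += keys[c] * ch                            # bind to key, bundle by summing
    return hv

def train_prototypes(encoded, labels, n_classes):
    """Bundle encoded samples per class; keep one such table per task."""
    protos = np.zeros((n_classes, D))
    for hv, y in zip(encoded, labels):
        protos[y] += hv
    return protos

def predict(hv, prototypes):                          # prototypes: (n_classes, D)
    sims = prototypes @ hv / (np.linalg.norm(prototypes, axis=1)
                              * np.linalg.norm(hv) + 1e-9)
    return sims.argmax()                              # interpretable: nearest prototype
```

Prediction reduces to a cosine-similarity lookup against class prototypes, which is what makes the per-task decisions easy to inspect.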